Internet 0 — inter-device internetworking
Abstraction layers also fall into redundancy traps. Each layer may need to reverify a data buffer, or redo computation that another layer has already performed, simply because it does not have access to cross-layer information. Crosstalk can be useful in removing redundancy, or in performing a joint optimisation between layers for careful control [10]. The I0 micro-stack is an example where information at the IP layer (traditionally encapsulated at the network layer of the ISO/OSI stack) is used to optimise packet handling at the TCP/UDP/HTTP layers (the transport and application layers), thereby creating much tighter network code. Lastly, this type of optimisation co-exists quite nicely with Moore's law. Even as processors get smaller and smaller, careful software optimisation means that an even smaller, cheaper processor that has a lower power draw, and is simpler to package, can be used to implement the Internet.

4. Peers do not need servers

I0 devices function without the need for servers, as any two nodes, via the Internet protocol, have the ability to talk to each other without going through an intermediary; two nodes can exchange information directly rather than going through a central broker to get the same information (a toy sketch of this broker-less exchange appears at the end of this section). This independence allows each node to have ownership over its state and threads of execution. This model also addresses scalability for data storage and computation, through redundancy and locality, in ways that a centralised system cannot [11, 12].

Centralised systems, such as the Web server/client relationship, are prone to failure at the one information source. If a Web server fails, the Web client has no redundancy plan and is required to simply wait until the problem has been rectified. Distributed systems, such as the GNUtella file-sharing network, do not have this problem. GNUtella nodes locally cache information and are redundant data sources throughout a network — when a particular piece of information is requested, the network can provide many sources, with the client even being free to choose a source which it believes will be 'easier' to access. When a single node on the network is removed, the rest of the network still operates without failure.

The above holds true when discussing an Internet 0 device network. If all devices are required to proxy their information through a centralised node, then the entire network is prone to failure when that single node becomes overwhelmed, fails, or comes under attack. Allowing a more open network, where devices intercommunicate directly with specific other nodes, means that failures are localised to those relationships; if a single node goes down, all that is affected are the other nodes that deal directly with it. No other state nor execution is directly affected.

All this is not to say there is no room for centralisation or hierarchy in this network [13]. Google is a prime example of centralisation — the Google spider walks the entire World Wide Web, indexing information and providing it so that a user can access it at a single point, the Google Web site. Without Google, the WWW would continue to exist and function normally; however, Google has added a higher-level service to the network that makes it more valuable. Hierarchy, too, is important, as it solves problems where certain nodes may have very valuable information that all other nodes wish to obtain, but which, for engineering reasons, would be too much of a burden to require a single node to disseminate to the entire network [14, 15].
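By way of illustration, the sketch below shows two "nodes" on one machine exchanging state directly over UDP with no broker in between. The ports and one-line payloads are invented for the example; a real I0 node would carry IP packets over its own physical layer rather than loopback.

```python
import socket

# Two peer "nodes" on localhost, each with its own socket.
# Ports and payloads are invented for illustration only.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 9001))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 9002))

a.sendto(b"light:on", ("127.0.0.1", 9002))        # node A tells node B its state
b.sendto(b"switch:pressed", ("127.0.0.1", 9001))  # node B answers A directly

print(b.recvfrom(64)[0])  # b'light:on'       -- no broker ever saw the data
print(a.recvfrom(64)[0])  # b'switch:pressed' -- each node owns its own state
```

If either node disappears, only the conversations it was party to are affected, which is the failure-localisation property argued for above.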
5. Physical identity

The crux of any network is the ability of nodes to identify each other. Names, however, mean very different things to different people. Computers on the Internet have Internet addresses as their names, but those names only specify where in the network a computer is located. Network adapters, by contrast, do have hardware addresses that allow for unique identification between machines, but the management of such a scheme can be burdensome.

Internet protocol addresses are not suitable for identification purposes because they are not doled out on the basis of physical location; rather, they are assigned based on where in the Internet hierarchy a machine currently resides. Additionally, many organisations assign internal, unroutable IP addresses to their computers in such a way that there does exist another machine in the world with the same IP address (most network address translation (NAT) equipment obtains a single IP address on its globally routed interface, and then assigns IP addresses from the 192.168/16 subnet to the internal machinery — therefore there is approximately a one in 100 000 chance that a computer inside one NAT has an address that is also used by a computer inside a different NAT). IP addresses are simply not globally unique, nor do they have any notion of permanence.

Hardware addresses, such as those used as the media access control (MAC) address on Ethernet, are, however, globally unique. Maintaining such a system requires a centralised serialisation authority — the IEEE. The IEEE makes sure never to assign the same block of hardware addresses to two different parties, at the cost of those parties purchasing either an 'organisationally unique identifier' or an 'individual address block', at the rate of US$1650 and US$550 respectively. This, unfortunately, locks many experimenters and developers out of creating their own network interfaces.

I0 devices instead rely on zero-configuration schemes [16] to obtain IP addresses, along with a random 128-bit string as their hardware address. The use of a 128-bit string as a MAC-like address comes from the observation that the chance of collision between two IID strings of that length is approximately 1 in 10³⁸, making it 'mostly' unique (MAC addresses need not actually be unique, simply unique enough that two interfaces with the same MAC address do not appear in any network smaller than two subnets bridged together), but also from the observation that it may be possible to use that string as an IPv6-like address [17] in future work.

With the ability to physically identify devices, a new programming paradigm can be introduced which involves physically accessing the network nodes. There are certain operations that one may not want to expose over the network; forcing an operator to physically access a node to verify that he or she has permission to perform the operation is therefore very promising.
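To make the uniqueness argument concrete, the following sketch (illustrative only; the helper names are ours, not part of any I0 specification) draws a random 128-bit identifier and applies the standard birthday bound to estimate collision odds:

```python
import ipaddress
import secrets

def random_hw_id() -> bytes:
    """Draw a random 128-bit hardware identifier, as an I0 node might."""
    return secrets.token_bytes(16)  # 16 bytes = 128 bits

def collision_probability(n_devices: int, bits: int = 128) -> float:
    """Birthday-bound estimate: P(any two IDs collide) ~ n(n-1) / 2^(bits+1)."""
    return n_devices * (n_devices - 1) / 2 / 2**bits

hw_id = random_hw_id()
print(hw_id.hex())                    # e.g. '3fa85f64...' (different every run)
print(collision_probability(2))       # ~2.9e-39, the 'roughly 1 in 10^38' above
print(collision_probability(10**9))   # ~1.5e-21: even a billion devices are safe
print(ipaddress.IPv6Address(hw_id))   # the same 128 bits read as an IPv6-like address
```

The last line hints at the connection to IPv6 noted above: 128 random bits can be read directly as an IPv6-style address.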
6. Big bits

Most development in networking technology has been devoted to going as fast as possible, saturating all available bandwidth on a channel — for the Internet, that means hardware research is devoted to faster-than-terabit Internet 2 links, while software research delves into saturating those links [18]. Unfortunately, two crucial points are easily forgotten in this race — a light bulb does not need to watch video on demand, and there are many hidden costs to pushing bits quickly.

Every bit in a network has a size. Given the speed of light, transmitting one bit a second means that the bit grows to 3.0 × 10⁸ metres long. Likewise, in a gigabit network, each bit is approximately 30 cm long. This bit size is effectively the window of opportunity for two devices to agree on what is being transmitted on the network. When the network is operating at a very fast data rate, considerations such as the impulse response of the medium and impedance matching between interfaces must be accounted for (the impulse response dictates what the onset of a bit looks like, while impedance matching allows for efficient power transfer between media without causing an 'echo' of the transmitted energy to be reflected back to the source), which in turn causes the network technology to become complicated and expensive, as agile radios, active splits, and efficient cabling are needed.

If the network is slowed down such that a bit is larger than the structure of the network, each node is effectively operating in the near field. All the vagaries of the network settle down on that time-scale, and the entire network reflects the value that the transmitter is sending. Less consideration needs to be given to the nonlinearities that occur at high data rates, and therefore transmitters and receivers can be constructed very simply and cheaply.
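The "size of a bit" arithmetic above is just distance = propagation speed × bit time. A minimal sketch, assuming propagation at the vacuum speed of light (signals in real cable travel at roughly two-thirds of that, so physical bits are correspondingly shorter):

```python
C = 3.0e8  # signal propagation speed in m/s (speed of light in vacuum)

def bit_length_m(bits_per_second: float) -> float:
    """Physical size of one bit: the distance the signal travels in one bit time."""
    return C / bits_per_second

for label, rate in [("1 bit/s", 1.0), ("1 Mbit/s", 1e6), ("1 Gbit/s", 1e9)]:
    print(f"{label:>9}: {bit_length_m(rate):.3g} m")
# 1 bit/s : 3e+08 m   (the bit spans the planet)
# 1 Mbit/s: 300 m
# 1 Gbit/s: 0.3 m     (the ~30 cm figure quoted above)
```

Once a bit is longer than the network itself, every node sees essentially the same instantaneous value on the wire, which is the near-field regime the text describes.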
7. End-to-end modulation

The end-to-end principle in systems design puts all the interaction intelligence at the edges of the network, and not in the central core. The central core is kept as agnostic to the actual transmission as possible, to prevent redundancy between central nodes, and between central nodes and those involved with the communication at the end points. The Internet exhibits end-to-end design in its use of the Internet protocol — no matter what hardware is being used, or what application is being run, all use the same network transport, allowing for flexibility because neither the hardware nor the application need know the details of the other. A non-end-to-end system would require that intermediary nodes process and interpret all the data that is streaming in, possibly reformat it, and then retransmit it.

Internet 0 relies on an end-to-end modulation scheme to transmit Internet protocol packets; it is therefore not only agnostic to which network the information it is transmitting is destined for, but the transmitter also need not worry about the actual media that the data is moving through. This is achieved by using a modulation scheme based upon impulse radio [19], where data is transmitted through the time positioning of high-frequency 'clicks' (a 1 μs click yields saturation from DC to 1 MHz). These clicks can be passed through almost all media — through IR via the flashing of an LED, through the air via ultrasonic speakers, through AC power lines by capacitive coupling, etc.

Each one of these media has very specific frequency pass-bands and other transmission characteristics; however, they can all pass a portion of the transmitted energy. As long as enough energy is received by the other end in a manner that allows for careful positioning of the onset of that energy, this encoding scheme is appropriate. These media can also be coupled together without the need for demodulating at the terminal of one and then remodulating at the terminal of the other — this is very similar to a Morse code operator synchronising their dots and dashes on an electrical telegraph with the flashes seen from a light coming from a ship; there is no need to actually translate those pulses into English and then back into code.

A single bit is divided into two time intervals in a Manchester-like encoding scheme — if an impulse occurs precisely in the centre of the first interval, then a 0 is being encoded; likewise, an impulse precisely in the centre of the second interval encodes a 1. Any other impulse can be rejected as noise. These bits are then strung together in an 8N1 serial fashion, with a start bit, eight data bits, and then one stop bit. Both the start and stop bits are identifiable as they have transitions precisely in the centres of both the first and second bit intervals (Fig 1). This modulation scheme has the property of being able to reject spurious transitions, as they will be incommensurate with the rest of the byte; if a click appears in a place where one is not expected, then that bit can easily be thrown out and rejected.

As in most ultra-wideband (UWB) systems, a spreading code can be used for additional noise rejection [20, 21] through the careful positioning of the start of each byte's click sequence (there exist simple implementations of the spreading encoders and decoders that can be used in an I0 device [22]). The onset of each byte is dictated by the spreading code, and not the positions of each bit, to allow a transmitter to use a spreading code if desired without dictating that the receiver use one; a receiver can receive an I0 click sequence which has had the onset of each byte positioned through spreading, simply ignore that additional piece of information, and decode as before.

Additionally, this scheme has no specification of how quickly or how slowly the transitions can be sent — the only item gating their speed is the impulse response of the system. This self-clocking specification is appropriate to being run at terahertz speeds for on-chip communication, or millihertz speeds for encoding into the waves of an ocean. Finally, it is also promising in being able to allow multiple transmitters to share the same channel, as a receiver with enough computational power can separate out multiple transmitters based solely on the click interval each is using. A transmitter can simply pick a random click interval and then blindly transmit on the channel — if the receiver knows exactly what click interval it is looking for, it can reject all other impulses as noise, or it can simultaneously decode all the data and sort through the incoming data based on click length.
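As a rough sketch of the framing just described (the interval time, time units, and LSB-first data-bit order are assumptions made for illustration; the actual format is defined by Fig 1, and the optional spreading of byte onsets is omitted):

```python
# A minimal sketch of the Manchester-like click encoding described above.
T = 1e-3  # half-bit interval in seconds; the scheme itself is speed-agnostic

def encode_byte(value: int, t0: float = 0.0) -> list[float]:
    """Return click times for one 8N1 frame: start bit, 8 data bits, stop bit."""
    bits = ["S"] + [(value >> i) & 1 for i in range(8)] + ["S"]  # "S" = framing
    clicks = []
    for slot, b in enumerate(bits):
        base = t0 + slot * 2 * T
        if b == "S":                 # framing bits click in BOTH interval centres
            clicks += [base + 0.5 * T, base + 1.5 * T]
        elif b == 0:                 # a 0 clicks in the centre of the first interval
            clicks.append(base + 0.5 * T)
        else:                        # a 1 clicks in the centre of the second interval
            clicks.append(base + 1.5 * T)
    return clicks

def decode_frame(clicks: list[float], t0: float = 0.0, tol: float = 0.1) -> int:
    """Recover the byte, ignoring any click that is not near an interval centre."""
    value = 0
    for i in range(8):
        base = t0 + (i + 1) * 2 * T  # slot 0 is the start bit
        has0 = any(abs(c - (base + 0.5 * T)) < tol * T for c in clicks)
        has1 = any(abs(c - (base + 1.5 * T)) < tol * T for c in clicks)
        if has1 and not has0:
            value |= 1 << i
        # has0 and has1 together would mark a framing bit; neither means noise only
    return value

frame = encode_byte(0x42)
frame.append(0.00317)                # a spurious click lands off-centre ...
assert decode_frame(frame) == 0x42   # ... and is rejected as incommensurate
```

A real receiver would recover the interval time from the start bit's double click rather than assuming it in advance, which is what makes the scheme self-clocking.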